Publication bias is a type of bias concerning which academic research is likely to be published, among all the research available to be published. Publication bias is of interest because literature reviews of claims about support for a hypothesis, or of values for a parameter, will themselves be biased if the original literature is contaminated by publication bias. While some preferences are desirable – for instance, a bias against publication of flawed studies – a tendency of researchers and journal editors to prefer some outcomes over others (e.g., results showing a significant finding) leads to a problematic bias in the published literature. Studies with significant results often do not appear to be superior in design quality to studies with null results; however, statistically significant results have been shown to be three times more likely to be published than papers with null results.

Multiple factors contribute to publication bias. For instance, once a result is well established, it may become newsworthy to publish papers affirming the null result. The most common reason for non-publication has been found to be investigators declining to submit their results. Factors cited as underlying this effect include investigators assuming they must have made a mistake when failing to reproduce a known finding, loss of interest in the topic, or anticipation that others will be uninterested in null results. Attempts to identify unpublished studies often prove difficult or unsatisfactory.〔H. Rothstein, A. J. Sutton and M. Borenstein (2005). ''Publication bias in meta-analysis: prevention, assessment and adjustments''. Wiley, Chichester, England; Hoboken, NJ.〕 One effort to decrease this problem is reflected in the move by some journals to require that studies submitted for publication be pre-registered (registering a study prior to data collection and analysis).
Several such registries exist, for instance at the Center for Open Science. Strategies are being developed to detect and control for publication bias, for instance down-weighting small and non-randomised studies because of their demonstrated high susceptibility to error and bias, and p-curve analysis.

==Definition==
Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested and on the significance and direction of the effects detected. The term "publication bias" appears to have been first used in 1959 by statistician Theodore Sterling to refer to fields in which "successful" research is more likely to be published. As a result, "''the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind''" (Type I errors).

Publication bias is sometimes called the "file drawer effect", or the "file drawer problem". The origin of this term is that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research. The term "file drawer problem" was coined by the psychologist Robert Rosenthal in 1979.〔Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86:638–41.〕

Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors to accept, positive rather than negative or inconclusive results. Outcome-reporting bias occurs when multiple outcomes are measured and analyzed, but reporting of these outcomes depends on the strength and direction of each result. A generic term coined to describe such post-hoc choices is HARKing ("Hypothesizing After the Results are Known").
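The core idea behind the p-curve analysis mentioned above can be illustrated with a minimal sketch: if studies are measuring a true effect, the significant p-values they report cluster near zero (a right-skewed curve), whereas a literature assembled purely by publication bias around a null effect yields p-values roughly uniform on (0, 0.05). The function below is a hypothetical simplification for illustration – it uses a sign test on whether significant p-values fall below or above 0.025 – not the procedure implemented by the actual p-curve software.

```python
# Sketch of the intuition behind p-curve analysis (hypothetical
# simplification, not the official p-curve procedure): true effects
# produce right-skewed significant p-values; selective publication of
# null effects produces a roughly flat distribution.
from math import comb

def pcurve_right_skew_test(p_values, alpha=0.05):
    """Among significant p-values, count how many fall in the lower
    half (below alpha/2). Returns that fraction and a one-sided
    binomial p-value for right-skew under the null of a uniform
    (flat) p-curve, where each p-value is below alpha/2 with
    probability 0.5."""
    sig = [p for p in p_values if 0 < p < alpha]
    n = len(sig)
    low = sum(1 for p in sig if p < alpha / 2)
    # One-sided binomial tail: P(X >= low) for X ~ Binomial(n, 0.5)
    tail = sum(comb(n, k) for k in range(low, n + 1)) / 2 ** n
    return low / n, tail

# A set of reported p-values skewed toward very small values, as
# expected when a real effect underlies the literature:
frac, p = pcurve_right_skew_test([0.001, 0.003, 0.01, 0.02, 0.04, 0.049])
```

A small tail probability would suggest the significant results reflect a genuine effect rather than selective reporting; real p-curve analyses use more powerful continuous tests, but the skewness logic is the same.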